An EPTAS for machine scheduling with bag-constraints
Machine scheduling is a fundamental optimization problem in computer science. The task of scheduling a set of jobs on a given number of machines while minimizing the makespan is well studied, and among other results we know that EPTASs exist for machine scheduling on identical machines. Das and Wiese initiated the study of a generalization of makespan minimization that includes so-called bag-constraints. In this variant of machine scheduling, the given set of jobs is partitioned into subsets, so-called bags. Given this partition, a schedule is only considered feasible when each machine runs at most one job from each bag.
Das and Wiese showed that this variant of machine scheduling admits a PTAS. We improve on this result by giving the first EPTAS for the machine scheduling problem with bag-constraints. We achieve this result through new insights into the problem and the restrictions imposed by the bag-constraints. We show that, to obtain an approximate solution, we can relax the bag-constraints and ignore some of the restrictions. Our EPTAS uses a new instance transformation that allows us to schedule large and small jobs independently of each other for a majority of the bags. We also show that, when scheduling large jobs, it suffices to respect the bag-constraints among only a constant number of bags. With these observations, our algorithm allows some conflicts when computing a schedule, and we show how to repair the schedule in polynomial time by swapping certain jobs around.
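As a small illustration of the bag-constraint model described above (not the paper's EPTAS; all function and variable names here are ours), the following sketch checks whether an assignment of jobs to machines respects the bag-constraints and computes its makespan:

```python
# Illustrative sketch only. Job j has a processing time times[j] and belongs
# to bag bags[j]; assignment[j] is the machine job j runs on. A schedule is
# feasible iff no machine receives two jobs from the same bag.

def is_feasible(assignment, bags):
    seen = set()
    for job, machine in enumerate(assignment):
        key = (machine, bags[job])
        if key in seen:          # two jobs of the same bag on one machine
            return False
        seen.add(key)
    return True

def makespan(assignment, times, num_machines):
    load = [0.0] * num_machines
    for job, machine in enumerate(assignment):
        load[machine] += times[job]
    return max(load)

times = [3, 2, 2, 1]
bags  = [0, 0, 1, 1]   # jobs 0,1 share a bag; jobs 2,3 share a bag
ok    = [0, 1, 0, 1]   # feasible: each bag is split across the two machines
bad   = [0, 0, 1, 1]   # infeasible: jobs 0 and 1 (same bag) on machine 0
```

Here `is_feasible(ok, bags)` holds while `is_feasible(bad, bags)` does not, matching the feasibility notion in the abstract.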
Closing the Gap for Makespan Scheduling via Sparsification Techniques
Makespan scheduling on identical machines is one of the most basic and fundamental packing problems studied in the discrete optimization literature. It asks for an assignment of n jobs to a set of m identical machines that minimizes the makespan. The problem is strongly NP-hard, and thus we do not expect a (1 + epsilon)-approximation algorithm with a running time that depends polynomially on 1/epsilon. Furthermore, Chen et al. [Chen, Jansen, Zhang, SODA 2014] recently showed that a running time of 2^{(1/epsilon)^{1-delta}} + poly(n) for any delta > 0 would imply that the Exponential Time Hypothesis (ETH) fails. A long sequence of algorithms has been developed trying to obtain low dependencies on 1/epsilon, the best of which achieves a running time of 2^{~O(1/epsilon^2)} + O(n*log(n)) [Jansen, SIAM J. Disc. Math. 2010]. In this paper we obtain an algorithm with a running time of 2^{~O(1/epsilon)} + O(n*log(n)), which is tight under ETH up to logarithmic factors in the exponent.
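For context on the makespan objective, a minimal classical baseline, the LPT (longest processing time first) greedy, a 4/3-approximation, can be sketched as follows; it is unrelated to the near-tight algorithm of this paper, and the function name is ours:

```python
# LPT greedy for makespan on identical machines: sort jobs by decreasing
# processing time, always assign the next job to the least-loaded machine.
import heapq

def lpt_makespan(times, m):
    loads = [0.0] * m            # min-heap of machine loads
    heapq.heapify(loads)
    for t in sorted(times, reverse=True):
        least = heapq.heappop(loads)
        heapq.heappush(loads, least + t)
    return max(loads)
```

For example, `lpt_makespan([4, 3, 3, 2], 2)` returns 6, which for this instance matches the optimum.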
Our main technical contribution is a new structural result on the configuration-IP. More precisely, we show the existence of a highly symmetric and sparse optimal solution, in which all but a constant number of machines are assigned a configuration with small support. This structure can then be exploited by integer programming techniques and enumeration. We believe that our structural result is of independent interest and should find applications in other settings.
In particular, we show how the structure can be applied to the minimum makespan problem on related machines and to a larger class of objective functions on parallel machines. For all these cases we obtain an efficient PTAS with running time 2^{~O(1/epsilon)} + poly(n).
Faster Algorithms for Integer Programs with Block Structure
We consider integer programming problems whose constraint matrix has a (recursive) block structure generalizing "n-fold integer programs", which recently received considerable attention in the literature. An n-fold IP is an integer program whose constraint matrix consists of n repetitions of a submatrix A on the top horizontal part and n repetitions of a matrix B on the diagonal below the top part.
Instead of allowing only two types of block matrices, one for the horizontal line and one for the diagonal, we generalize the n-fold setting to allow for arbitrary matrices in every block. We show that such an integer program can be solved in time polynomial in the number of blocks and in phi, with an exponential dependence only on the block dimensions and on Delta (ignoring logarithmic factors). Here Delta is an upper bound on the largest absolute value of an entry of the constraint matrix, and phi is the largest binary encoding length of a coefficient of the objective. This improves upon the previously best algorithm of Hemmecke, Onn and Romanchuk. In particular, our algorithm is not exponential in the number of columns of A and B.
Our algorithm is based on a new upper bound on the l_1-norm of an element of the "Graver basis" of an integer matrix and on a proximity bound between the LP and IP optimal solutions, tailored for IPs with block structure. These new bounds rely on the "Steinitz Lemma".
Furthermore, we extend our techniques to the recently introduced "tree-fold IPs", where we again present a more efficient algorithm in a generalized setting.
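The n-fold block shape described above can be illustrated with a small assembly routine (plain Python, toy dimensions; the function name is ours):

```python
# Build an n-fold constraint matrix: n copies of a top block A side by side,
# and n copies of a block B on the diagonal below. A is r x t, B is s x t.

def n_fold_matrix(A, B, n):
    r, t = len(A), len(A[0])
    s = len(B)
    assert len(B[0]) == t, "A and B must have the same number of columns"
    rows = []
    for i in range(r):                       # top part: [A A ... A]
        rows.append(A[i] * n)
    for k in range(n):                       # diagonal part: blocks of B
        for i in range(s):
            rows.append([0] * (k * t) + B[i] + [0] * ((n - 1 - k) * t))
    return rows

A = [[1, 1]]
B = [[1, 0]]
M = n_fold_matrix(A, B, 2)
# M == [[1, 1, 1, 1],
#       [1, 0, 0, 0],
#       [0, 0, 1, 0]]
```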
An Algorithmic Theory of Integer Programming
We study the general integer programming problem where the number of variables n is a variable part of the input. We consider two natural parameters of the constraint matrix A: its numeric measure a and its sparsity measure d. We show that integer programming can be solved in time g(a,d) * poly(n,L), where g is some computable function of the parameters a and d, and L is the binary encoding length of the input. In particular, integer programming is fixed-parameter tractable parameterized by a and d, and is solvable in polynomial time for every fixed a and d. Our results also extend to nonlinear separable convex objective functions. Moreover, for linear objectives, we derive a strongly-polynomial algorithm, that is, one with running time g(a,d) * poly(n), independent of the rest of the input data.
We obtain these results by developing an algorithmic framework based on the idea of iterative augmentation: starting from an initial feasible solution, we show how to quickly find augmenting steps which rapidly converge to an optimum. A central notion in this framework is the Graver basis of the matrix A, which constitutes a set of fundamental augmenting steps. The iterative augmentation idea is then enhanced via the use of other techniques such as new and improved bounds on the Graver basis, rapid solution of integer programs with bounded variables, proximity theorems, a new proximity-scaling algorithm, the notion of a reduced objective function, and others.
As a consequence of our work, we advance the state of the art of solving block-structured integer programs. In particular, we develop near-linear time algorithms for n-fold, tree-fold, and 2-stage stochastic integer programs. We also discuss some of the many applications of these classes.
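The iterative-augmentation idea can be illustrated with a stylized toy (names and setup are ours): a precomputed set of directions plays the role of the Graver basis, and unit steps stand in for the step-length choices of the real framework.

```python
# Stylized iterative augmentation: repeatedly apply an improving direction
# that keeps the point feasible and inside its box bounds, until none helps.
# Real Graver-based algorithms also pick step lengths; this sketch does not.

def augment(x, c, directions, lower, upper, feasible):
    """Greedily improve c^T x with directions from a given set."""
    improved = True
    while improved:
        improved = False
        for g in directions:
            y = [xi + gi for xi, gi in zip(x, g)]
            in_box = all(l <= yi <= u for yi, l, u in zip(y, lower, upper))
            gain = sum(ci * gi for ci, gi in zip(c, g))
            if feasible(y) and in_box and gain > 0:
                x, improved = y, True
    return x

# Toy IP: maximize x0 + 2*x1 subject to x0 - x1 = 0 and 0 <= x <= 3.
# For the matrix [1, -1], the Graver basis is {(1, 1), (-1, -1)}.
best = augment(
    x=[0, 0], c=[1, 2],
    directions=[(1, 1), (-1, -1)],
    lower=[0, 0], upper=[3, 3],
    feasible=lambda y: y[0] - y[1] == 0,
)
# best == [3, 3]
```

The loop terminates once no Graver direction improves the objective, which for this toy instance happens exactly at the optimum.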
Fully Dynamic Bin Packing Revisited
We consider the fully dynamic bin packing problem, where items arrive and depart in an online fashion and repacking of previously packed items is allowed. The goal is, of course, to minimize both the number of bins used and the amount of repacking. A recently introduced way of measuring the repacking cost at each timestep is the migration factor, defined as the total size of repacked items divided by the size of the arriving or departing item. Concerning the trade-off between the number of bins and the migration factor, if we wish to achieve an asymptotic competitive ratio of 1 + epsilon for the number of bins, a relatively simple argument proves a lower bound of Omega(1/epsilon) for the migration factor. We establish a nearly matching upper bound of O(1/epsilon^4 * log(1/epsilon)) using a new dynamic rounding technique and new ideas to handle small items in a dynamic setting such that no amortization is needed. The running time of our algorithm is polynomial in the number of items n and in 1/epsilon.
The previous best trade-off was for an asymptotic competitive ratio of 5/4 for the bins (rather than 1 + epsilon) and needed an amortized number of O(log n) repackings (while in our scheme the number of repackings is independent of n and non-amortized).
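A naive baseline for the fully dynamic setting (not the paper's scheme, and with none of its guarantees) is first-fit insertion and removal with no repacking at all; the class and method names below are ours.

```python
# Fully dynamic bin maintenance with zero migration: items are placed
# first-fit on arrival and simply removed on departure. Bins stay valid,
# but the number of bins can drift far from optimal over time.

class Bins:
    def __init__(self, capacity=1.0):
        self.capacity = capacity
        self.bins = []                    # each bin: dict item_id -> size

    def insert(self, item_id, size):
        for b in self.bins:               # first fit
            if sum(b.values()) + size <= self.capacity + 1e-9:
                b[item_id] = size
                return
        self.bins.append({item_id: size})

    def depart(self, item_id):
        for b in self.bins:
            if item_id in b:
                del b[item_id]
                break
        self.bins = [b for b in self.bins if b]   # drop empty bins

    def num_bins(self):
        return len(self.bins)
```

With migration factor zero, such a scheme cannot approach the (1 + epsilon, O(1/epsilon^4 * log(1/epsilon))) trade-off above, which is exactly why bounded repacking is allowed in the model.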
Complexity Bounds for Block-IPs
We consider integer programs (IPs) with a certain block structure, called two-stage stochastic. A two-stage stochastic IP is an integer program whose constraint matrix consists of n repetitions of a block on the vertical line and n blocks on the diagonal beside it. We improve the bound on the Graver complexity of two-stage stochastic IPs, reducing the dependency on the parameters; the bound is asymptotically tight under the exponential time hypothesis in a natural special case. The improved Graver complexity bound stems from improved bounds on the intersection of a class of structurally rich integer cones. Our bound, which depends only on the dimension and on the largest absolute value of the entries, is independent of the number of intersected integer cones. We investigate special properties of this class, which is complemented by the fact that these properties do not hold for general integer cones. Moreover, we give structural characterizations of this class that admit their use for two-stage stochastic IPs.
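The two-stage stochastic block shape, n copies of a first-stage block stacked vertically with second-stage blocks on the diagonal beside them, can be sketched as follows (toy dimensions, function name ours):

```python
# Build a two-stage stochastic constraint matrix: block A (r x s, first-stage
# columns) repeated down the first block-column, block B (r x t) on the
# diagonal to its right. Contrast with n-fold, where copies of A sit on top.

def two_stage_matrix(A, B, n):
    r, s = len(A), len(A[0])
    t = len(B[0])
    assert len(B) == r, "A and B must have the same number of rows"
    rows = []
    for k in range(n):
        for i in range(r):
            rows.append(A[i] + [0] * (k * t) + B[i] + [0] * ((n - 1 - k) * t))
    return rows

M = two_stage_matrix([[1]], [[1]], 2)
# M == [[1, 1, 0],
#       [1, 0, 1]]
```

The shared first-stage columns are what couple the n scenarios and drive the Graver complexity discussed above.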